1.
Sensors (Basel) ; 22(13)2022 Jul 03.
Article En | MEDLINE | ID: mdl-35808523

In emergent technologies, data integrity is critical for message-passing communications, where security measures and validations must be considered to prevent the entry of invalid data, detect transmission errors, and prevent data loss. The SHA-256 algorithm is used to meet these requirements. Current hardware architectures present issues in balancing processing, efficiency, and cost in real time, because some of them introduce significant critical paths. Moreover, the SHA-256 algorithm itself includes no verification mechanisms for internal calculations or failure prevention. Hardware implementations can be affected by diverse problems, ranging from physical phenomena to interference or faults inherent to data spectra. Previous works have mainly addressed this problem through three kinds of redundancy: information, hardware, or time. To the best of our knowledge, pipelining has not previously been used to perform different hash calculations for redundancy purposes. Therefore, in this work we present a novel hybrid architecture implemented on a 3-stage pipeline structure. Pipelining is traditionally used to improve performance by processing several blocks simultaneously; instead, we propose using the pipeline technique to implement hardware and time redundancies, analyzing hardware resources and performance to balance the critical path. We improved performance at a given clock speed by defining a data-flow transformation across several sequential phases. Our architecture achieved a throughput of 441.72 Mbps using 2255 LUTs, for an efficiency of 195.8 Kbps/LUT.
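The redundancy idea above (computing the same hash more than once and comparing results to flag a transient fault) can be sketched in software. This is a minimal illustration in Python using the standard `hashlib` library, not the paper's pipelined FPGA design; the function name and the comparison scheme are assumptions for illustration only.

```python
import hashlib


def redundant_sha256(message: bytes) -> str:
    """Compute SHA-256 twice (time redundancy) and compare the digests.

    In a hardware pipeline, the two computations would run on separate
    stages or replicated units; a digest mismatch would indicate a
    transient fault.  In software both runs are deterministic, so this
    only illustrates the comparison step of a redundant scheme.
    """
    first = hashlib.sha256(message).hexdigest()
    second = hashlib.sha256(message).hexdigest()
    if first != second:
        raise RuntimeError("fault detected: redundant digests disagree")
    return first
```

A hardware version would additionally have to balance the extra comparison logic against the critical path, which is the trade-off the architecture above analyzes.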

2.
Sensors (Basel) ; 22(7)2022 Mar 24.
Article En | MEDLINE | ID: mdl-35408115

The latest generation of communication networks, such as SDVNs (software-defined vehicular networks) and VANETs (vehicular ad hoc networks), should evaluate their communication channels in order to adapt their behavior. The quality of communication in data networks depends on the behavior of the transmission channel selected to send the information. Transmission channels can be affected by diverse problems, ranging from physical phenomena (e.g., weather, cosmic rays) to interference or faults inherent to data spectra. In particular, if the channel has good transmission quality, we can maximize bandwidth use. Otherwise, fault-tolerant schemes, which degrade the transmission speed while resolving errors or failures, should be included; these schemes consume more energy and are slower because lost packets must be requested again (recovery). In this sense, one of the open problems in communications is how to design and implement an efficient, low-power mechanism capable of sensing the quality of the channel and automatically making the adjustments needed to select the channel over which to transmit. In this work, we present a trade-off analysis based on hardware implementations that identify whether a channel has low or high quality, implementing four machine learning algorithms: Decision Trees, Multi-Layer Perceptron, Logistic Regression, and Support Vector Machines. We obtained the best trade-off, with an accuracy of 95.01% and an efficiency of 9.83 Mbps/LUT (lookup table), with a hardware implementation of a Decision Tree of depth five.
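A decision tree maps well to hardware precisely because it reduces to a cascade of threshold comparisons. The sketch below shows that structure in Python as a small hand-written tree (depth three here, not the paper's learned depth-five model); the feature names and thresholds are hypothetical, chosen only to illustrate how a channel-quality classifier collapses into comparisons.

```python
def classify_channel(snr_db: float, packet_loss: float, jitter_ms: float) -> str:
    """Toy decision tree labelling a channel 'high' or 'low' quality.

    Each if/else is one tree node; in an FPGA each node becomes a
    comparator, which is why tree depth directly bounds the critical
    path.  Thresholds here are illustrative, not learned values.
    """
    if snr_db >= 20.0:
        # Strong signal: quality hinges on loss rate.
        if packet_loss <= 0.01:
            return "high"
        return "low"
    # Weak signal: require both low jitter and very low loss.
    if jitter_ms <= 5.0 and packet_loss <= 0.005:
        return "high"
    return "low"
```

In a real deployment the thresholds would come from training on channel measurements; the fixed-depth structure is what yields the efficiency (Mbps/LUT) reported above.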

3.
Sensors (Basel) ; 22(8)2022 Apr 13.
Article En | MEDLINE | ID: mdl-35458970

Cryptography has become one of the vital disciplines for information technologies such as the IoT (Internet of Things), IIoT (Industrial Internet of Things), I4.0 (Industry 4.0), and automotive applications. Some fundamental characteristics required by these applications are confidentiality, authentication, integrity, and nonrepudiation, which can be achieved using hash functions. A cryptographic hash function that provides a higher level of security is SHA-3. However, in real and modern applications, FPGA-based hardware implementations of hash functions are prone to errors caused by noise and radiation: because of the avalanche effect (diffusion), flipping a single bit changes most of the bits of the hash, so a change in the state of one bit can produce a completely different output than expected. It is therefore vital to detect and correct any error during the algorithm's execution. Current hardware solutions mainly seek to detect errors but not correct them (e.g., using parity checking or scrambling). To the best of our knowledge, there are no solutions that both detect and correct errors in SHA-3 hardware implementations. This article presents the design and a comparative analysis of four FPGA architectures: two without fault tolerance and two with fault tolerance, the latter employing Hamming codes to detect and correct faults in SHA-3 using an encoder and a decoder at the step-mapping function level. Results show that the two fault-tolerant architectures can detect up to 120 and 240 errors, respectively, for every run of KECCAK-p, which is considered the worst case. Additionally, the paper provides a comparative analysis of these architectures against other works in the literature in terms of experimental results such as frequency, resources, throughput, and efficiency.
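The encoder/decoder mechanism described above can be illustrated with the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and corrects any single flipped bit. This is a generic software sketch of the code family, not the paper's SHA-3-specific encoder placement; the bit layout follows the standard convention of parity bits at positions 1, 2, and 4.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4  # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]


def hamming74_decode(codeword):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(codeword)
    p1, p2, d1, p3, d2, d3, d4 = c
    # Recompute each parity check; the syndrome bits spell out the
    # 1-based position of the erroneous bit (0 means no error).
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1  # correct the single-bit error in place
    return [c[2], c[4], c[5], c[6]]
```

In the architectures above, an encoder and decoder of this kind wrap the internal state between step-mapping functions, so a radiation-induced bit flip is corrected before the avalanche effect can propagate it through the rest of the permutation.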

4.
Sensors (Basel) ; 20(11)2020 Jun 02.
Article En | MEDLINE | ID: mdl-32498271

The electrocardiogram records the heart's electrical activity and generates a significant amount of data. Analyzing these data helps us detect diseases and disorders by classifying abnormalities in heart bio-signals. In unbalanced-data contexts, where the classes are not equally represented, optimizing and configuring the classification models is highly complex and computationally demanding. Moreover, the performance of electrocardiogram classification depends on the approach and the parameter estimation used to generate a model with high accuracy, sensitivity, and precision. Previous works have proposed hybrid approaches, and only a few implemented parameter optimization; most applied empirical parameter tuning at the data level or the algorithm level. Hence, a scheme that includes sensitivity metrics alongside high precision and accuracy deserves special attention. In this article, a metaheuristic optimization approach for parameter estimation in arrhythmia classification from unbalanced data is presented. We selected an unbalanced data subset to classify eight types of arrhythmia. It is important to highlight that we combined undersampling based on a clustering method (data level) with a feature selection method (algorithm level) to tackle the unbalanced-class problem. To explore parameter estimation and improve the classification of our model, we compared two metaheuristic approaches based on differential evolution and particle swarm optimization. The final results showed an accuracy of 99.95%, an F1 score of 99.88%, a sensitivity of 99.87%, a precision of 99.89%, and a specificity of 99.99%, which are high even in the presence of unbalanced data.
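Differential evolution, one of the two metaheuristics compared above, can be sketched in a few lines: each candidate parameter vector is perturbed by the scaled difference of two other population members, crossed over with its parent, and kept only if it scores better. This is a generic DE/rand/1/bin sketch minimizing an arbitrary objective, not the paper's classifier-tuning setup; the function name, hyperparameters, and bounds handling are illustrative assumptions.

```python
import random


def differential_evolution(fitness, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=42):
    """Minimal DE/rand/1/bin: minimize `fitness` over box `bounds`.

    In parameter estimation for a classifier, `fitness` would evaluate
    a candidate hyperparameter vector (e.g., via cross-validated error);
    here it is any callable mapping a list of floats to a score.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: base vector plus scaled difference of two others.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + f * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # Binomial crossover with the parent, clipped to the bounds.
            trial = [
                min(max(mutant[k] if rng.random() < cr else pop[i][k],
                        bounds[k][0]), bounds[k][1])
                for k in range(dim)
            ]
            s = fitness(trial)
            if s < scores[i]:  # greedy selection: keep the better candidate
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

Particle swarm optimization, the other metaheuristic compared, follows the same evaluate-and-keep-the-best loop but moves candidates by velocity updates toward personal and global bests rather than by difference vectors.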


Arrhythmias, Cardiac; Electrocardiography; Signal Processing, Computer-Assisted; Algorithms; Arrhythmias, Cardiac/classification; Arrhythmias, Cardiac/diagnosis; Cluster Analysis; Databases, Factual; Humans